
    Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding

    Precise spike timing as a means to encode information in neural networks is biologically supported, and is advantageous over frequency-based codes because it processes input features on a much shorter time-scale. For these reasons, much recent attention has been focused on the development of supervised learning rules for spiking neural networks that utilise a temporal coding scheme. However, despite significant progress in this area, there is still a lack of rules that have a theoretical basis and yet can be considered biologically relevant. Here we examine the general conditions under which synaptic plasticity most effectively takes place to support the supervised learning of a precise temporal code. As part of our analysis we examine two spike-based learning methods: one which relies on an instantaneous error signal to modify synaptic weights in a network (INST rule), and the other on a filtered error signal for smoother synaptic weight modifications (FILT rule). We test the accuracy of the solutions provided by each rule with respect to their temporal encoding precision, and then measure the maximum number of input patterns they can learn to memorise using the precise timings of individual spikes as an indication of their storage capacity. Our results demonstrate the high performance of FILT in most cases, underpinned by the rule's error-filtering mechanism, which is predicted to provide smooth convergence towards a desired solution during learning. We also find FILT to be most efficient at performing input pattern memorisations, most noticeably when patterns are identified using spikes with sub-millisecond temporal precision. In comparison with existing work, we determine the performance of FILT to be consistent with that of the highly efficient E-learning Chronotron, but with the distinct advantage that FILT is also implementable as an online method for increased biological realism.
    Comment: 26 pages, 10 figures; this version is published in PLoS ONE and incorporates reviewer comments
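    As a reading aid, the sketch below contrasts the two update styles described above. Everything in it (variable names, the exponential error filter, parameter values, and the use of a generic presynaptic trace) is an illustrative assumption, not the paper's exact INST/FILT formulation.

```python
import numpy as np

# Hypothetical sketch of the INST vs FILT weight updates described in the
# abstract. Names, the exponential filter kernel, and parameter values are
# assumptions for illustration, not the paper's exact formulation.

dt = 0.1          # simulation time step (ms)
tau = 10.0        # assumed time constant of the error filter (ms)
eta = 0.01        # learning rate
T = 500           # number of time steps

rng = np.random.default_rng(0)
target_spikes = (rng.random(T) < 0.02).astype(float)  # desired output train
actual_spikes = (rng.random(T) < 0.02).astype(float)  # network's output train
pre_trace = rng.random(T)                             # presynaptic activity trace

# INST rule: weight change driven by the instantaneous spike-train error.
dw_inst = eta * (target_spikes - actual_spikes) * pre_trace

# FILT rule: the same error, but low-pass filtered for smoother updates.
error_filt = np.zeros(T)
for t in range(1, T):
    error_filt[t] = error_filt[t - 1] * np.exp(-dt / tau) \
                    + (target_spikes[t] - actual_spikes[t])
dw_filt = eta * error_filt * pre_trace

print("total |update|, INST:", np.abs(dw_inst).sum())
print("total |update|, FILT:", np.abs(dw_filt).sum())
```

    The filtered error accumulates recent mismatches rather than reacting to each one in isolation, which is the mechanism the abstract credits for FILT's smoother convergence.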

    An Efficient Method for Online Detection of Polychronous Patterns in Spiking Neural Networks

    Polychronous neural groups are effective structures for the recognition of precise spike-timing patterns, but the detection method is an inefficient, multi-stage brute-force process that works off-line on pre-recorded simulation data. This work presents a new model of polychronous patterns that can capture precise sequences of spikes directly in the neural simulation. In this scheme, each neuron is assigned a randomized code that is used to tag the post-synaptic neurons whenever a spike is transmitted. This creates a polychronous code that preserves the order of pre-synaptic activity and can be registered in a hash table when the post-synaptic neuron spikes. A polychronous code is a sub-component of a polychronous group that will occur, along with others, when the group is active. We demonstrate the representational and pattern-recognition ability of polychronous codes on a direction-selective visual task involving moving bars that is typical of a computation performed by simple cells in the cortex. The computational efficiency of the proposed algorithm far exceeds existing polychronous group detection methods and is well suited for online detection.
    Comment: 17 pages, 8 figures
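    The tagging scheme lends itself to a compact sketch. The toy connectivity, the tag width, and the use of Python's built-in hash are all assumptions for illustration; only the idea of order-preserving tags registered in a hash table at post-synaptic spike time comes from the abstract.

```python
import random
from collections import defaultdict

# Hypothetical sketch of the online polychronous-code scheme described in
# the abstract: every neuron carries a random tag; whenever it spikes, the
# tag is appended to each post-synaptic neuron's buffer, and when that
# neuron itself spikes, the ordered buffer is hashed and counted. All names
# and data structures here are illustrative assumptions.

random.seed(1)
n_neurons = 5
tags = [random.getrandbits(32) for _ in range(n_neurons)]  # per-neuron codes
post = {0: [3], 1: [3], 2: [4], 3: [4], 4: []}             # toy connectivity

buffers = defaultdict(list)     # tags accumulated by each neuron
code_table = defaultdict(int)   # hash table of observed polychronous codes

def on_spike(neuron):
    """Register the neuron's accumulated code, then tag its targets."""
    if buffers[neuron]:
        code = hash(tuple(buffers[neuron]))  # order-preserving code
        code_table[code] += 1
        buffers[neuron].clear()
    for target in post[neuron]:
        buffers[target].append(tags[neuron])

# A toy spike sequence: neurons 0 then 1 drive 3; neurons 2 and 3 drive 4.
for neuron in [0, 1, 3, 2, 4]:
    on_spike(neuron)

print(dict(code_table))
```

    Because the buffer is hashed as an ordered tuple, the same set of presynaptic spikes in a different order yields a different code, which is what lets the scheme discriminate precise spike sequences rather than mere coincidences.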

    Biologically inspired temporal sequence learning

    We propose a temporal sequence learning model in spiking neural networks consisting of Izhikevich spiking neurons. In our reward-based learning model, we train a network to associate two stimuli with a temporal delay and a target response. The learning rule depends on reward signals that modulate the weight changes derived from the spike-timing-dependent plasticity (STDP) function. The dynamic properties of our model can be attributed to the sparse and recurrent connectivity, synaptic transmission delays, background activity and the inter-stimulus interval (ISI). We have tested the learning on a visual recognition task and on temporal AND and XOR problems. The network can be trained to associate a stimulus pair with its target response and to discriminate the temporal sequence of the stimulus presentation.
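    Reward-modulated STDP of the kind described above is commonly implemented with an eligibility trace that holds candidate weight changes until a reward arrives. The sketch below follows that standard construction; the window shape, time constants, event timings, and the assumed one time unit between events are illustrative, not the paper's exact values.

```python
import numpy as np

# Minimal sketch of reward-modulated STDP: STDP-derived weight changes are
# accumulated in a decaying eligibility trace and only committed to the
# weight when scaled by a reward signal. All parameter values are assumed.

a_plus, a_minus = 0.01, 0.012   # STDP amplitudes (assumed)
tau_stdp = 20.0                 # STDP window time constant (ms, assumed)
tau_elig = 1000.0               # eligibility-trace decay (ms, assumed)

def stdp(delta_t):
    """Exponential STDP window: pre-before-post (delta_t > 0) potentiates."""
    if delta_t > 0:
        return a_plus * np.exp(-delta_t / tau_stdp)
    return -a_minus * np.exp(delta_t / tau_stdp)

w, eligibility = 0.5, 0.0
# (t_post - t_pre, reward) pairs; reward arrives only on the last event.
events = [(+5.0, 0.0), (+8.0, 0.0), (-3.0, 1.0)]

for delta_t, reward in events:
    # Decay the trace (assuming one time unit between events), add new STDP.
    eligibility = eligibility * np.exp(-1.0 / tau_elig) + stdp(delta_t)
    w += reward * eligibility   # weight changes gated by reward
    print(f"dt={delta_t:+.1f} ms, reward={reward}, w={w:.4f}")
```

    The trace lets a reward delivered after the inter-stimulus interval still credit the spike pairings that preceded it, which is the bridge between STDP and the delayed target response described in the abstract.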

    Supervised Learning in Multilayer Spiking Neural Networks

    The current article introduces a supervised learning algorithm for multilayer spiking neural networks. The algorithm presented here overcomes some limitations of existing learning algorithms: it can be applied to neurons firing multiple spikes, and it can in principle be applied to any linearisable neuron model. The algorithm is applied successfully to various benchmarks, such as the XOR problem and the Iris data set, as well as complex classification problems. The simulations also show the flexibility of this supervised learning algorithm, which permits different encodings of the spike timing patterns, including precise spike-train encoding.
    Comment: 38 pages, 4 figures
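    The paper's algorithm itself is not reproduced here, but the linearisation such methods rest on can be shown in a few lines: at a threshold crossing, the spike time shifts with a weight as d t_spike / d w = -(du/dw) / (du/dt), which is well defined for any neuron model whose membrane potential is differentiable at the crossing. The alpha-kernel neuron and all parameter values below are assumptions for illustration.

```python
import numpy as np

# Illustrative sketch (not the paper's exact algorithm) of the spike-time
# linearisation: at a threshold crossing, dt_spike/dw = -(du/dw)/(du/dt).
# A single alpha-shaped input stands in for a "linearisable" neuron model.

tau, theta = 10.0, 1.0   # membrane time constant (ms) and threshold, assumed

def u(t, w):
    """Membrane potential: one alpha-shaped input scaled by weight w."""
    return w * (t / tau) * np.exp(1.0 - t / tau)

def spike_time(w, ts=np.linspace(0.01, 30.0, 300001)):
    """First time u(t, w) crosses the threshold (grid search)."""
    crossing = np.nonzero(u(ts, w) >= theta)[0]
    return ts[crossing[0]]

w = 1.2
t_s = spike_time(w)

# Analytic gradient from the linearisation at the crossing point.
du_dw = (t_s / tau) * np.exp(1.0 - t_s / tau)
du_dt = (w / tau) * np.exp(1.0 - t_s / tau) * (1.0 - t_s / tau)
dt_dw_analytic = -du_dw / du_dt

# Numerical check by central finite differences.
eps = 1e-3
dt_dw_numeric = (spike_time(w + eps) - spike_time(w - eps)) / (2 * eps)

print(f"spike time: {t_s:.3f} ms")
print(f"dt/dw analytic: {dt_dw_analytic:.3f}, numeric: {dt_dw_numeric:.3f}")
```

    The two estimates should agree closely, confirming that a spike-time error can be propagated back through the weights wherever this derivative exists, which is what "linearisable" buys the algorithm.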

    Editorial: Machine Learning in Natural Complex Systems

    International audience

    Elman backpropagation as reinforcement for simple recurrent networks

    Simple recurrent networks (SRNs) in symbolic time-series prediction (e.g. language processing models) are frequently trained with gradient-descent-based learning algorithms, notably with variants of backpropagation (BP). A major drawback for the cognitive plausibility of BP is that it is a supervised scheme in which a teacher has to provide a fully specified target answer. Yet agents in natural environments often receive only a summary feedback about the degree of success or failure, a view adopted in reinforcement learning schemes. In this work we show that for SRNs in prediction tasks for which there is a probability interpretation of the network's output vector, Elman BP can be reimplemented as a reinforcement learning (RL) scheme for which the expected weight updates agree with the ones from traditional Elman BP. Network simulations on formal languages corroborate this result and show that the learning behaviours of Elman BP and its reinforcement variant are very similar in online learning tasks as well.
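    The output-layer half of this claim can be checked numerically. The sketch below uses a softmax output, a sampled symbol, a success/failure reward, and a 1/p importance weight; this weighting is one standard construction that makes the expected RL update equal the supervised softmax delta (t - p), and it is offered as an illustration rather than the paper's exact scheme.

```python
import numpy as np

# Numerical sketch: a reinforcement-style update computed from a sampled
# output symbol and a correct/incorrect reward matches the supervised BP
# delta (t - p) in expectation. The 1/p importance weighting is an assumed
# construction for illustration, not necessarily the paper's exact one.

rng = np.random.default_rng(0)

logits = np.array([0.5, -0.2, 1.0])        # toy output pre-activations
p = np.exp(logits) / np.exp(logits).sum()  # softmax: probability output
target = 2                                 # index of the correct next symbol
t = np.eye(3)[target]                      # one-hot teacher signal

# Supervised Elman BP error at a softmax output layer.
delta_bp = t - p

# RL variant: sample a symbol, receive summary feedback (1 if correct,
# else 0), and apply an importance-weighted REINFORCE-style update.
n_trials = 200_000
samples = rng.choice(3, size=n_trials, p=p)          # network "acts"
weights = (samples == target) / p[samples]           # reward / sample prob
delta_rl = ((np.eye(3)[samples] - p) * weights[:, None]).mean(axis=0)

print("BP delta:        ", np.round(delta_bp, 4))
print("RL delta (mean): ", np.round(delta_rl, 4))
```

    Averaged over many trials the sampled update converges to the supervised delta, which is the sense in which a teacher giving only success/failure feedback can stand in for one providing the full target vector.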